Multimodal Open-Domain Conversations with the Nao Robot
Authors
Abstract
In this paper we discuss the design of human-robot interaction, focusing especially on social robot communication and multimodal information presentation. As a starting point we use the WikiTalk application, an open-domain conversational system previously developed using a robotics simulator. We describe how it can be implemented on the Nao robot platform, enabling Nao to make informative spoken contributions on a wide range of topics during conversation. Spoken interaction is further combined with gesturing in order to support Nao's presentations with natural multimodal capabilities, and to enhance and explore natural communication between human users and robots.
Similar Papers
Speech, gaze and gesturing: multimodal conversational interaction with Nao robot
The paper presents a multimodal conversational interaction system for the Nao humanoid robot. The system was developed at the 8th International Summer Workshop on Multimodal Interfaces, Metz, 2012. We implemented WikiTalk, an existing spoken dialogue system for open-domain conversations, on Nao. This greatly extended the robot’s interaction capabilities by enabling Nao to talk about an unlimite...
Open-Domain Conversation with a NAO Robot
In this demo, we present a multimodal conversation system, implemented using a Nao robot and Wikipedia. The system was developed at the 8th International Workshop on Multimodal Interfaces in Metz, France, 2012. The system is based on an interactive, open-domain spoken dialogue system called WikiTalk, which guides the user through conversations based on the link struct...
Evaluation of WikiTalk - User Studies of Human-Robot Interaction
The paper concerns the evaluation of Nao WikiTalk, an application that enables a Nao robot to serve as a spoken open-domain knowledge access system. With Nao WikiTalk the robot can talk about any topic the user is interested in, using Wikipedia as its knowledge source. The robot suggests some topics to start with, and the user shifts to related topics by speaking their names after the robot men...
Achieving Multimodal Cohesion during Intercultural Conversations
How do English as a lingua franca (ELF) speakers achieve multimodal cohesion on the basis of their specific interests and cultural backgrounds? From a dialogic and collaborative view of communication, this study focuses on how verbal and nonverbal modes cohere during intercultural conversations. The data include approximately 160 minutes of transcribed video recordings of ELF interactions ...
Generating Co-speech Gestures for the Humanoid Robot NAO through BML
We develop an expressive gesture model based on the GRETA platform to generate gestures accompanying speech for different embodiments. This paper presents our ongoing work on an implementation of this model on the humanoid robot NAO. From a specification of multimodal behaviors encoded with the behavior markup language, BML, the system synchronizes and realizes the verbal and nonverbal behaviors on...